Convergence Rate of Incremental Gradient and Incremental Newton Methods


Similar articles

Convergence rate of incremental aggregated gradient algorithms

Motivated by applications to distributed asynchronous optimization and large-scale data processing, we analyze the incremental aggregated gradient method for minimizing a sum of strongly convex functions from a novel perspective, simplifying the global convergence proofs considerably and proving a linear rate result. We also consider an aggregated method with momentum and show its linear convergence...


On the Convergence Rate of Incremental Aggregated Gradient Algorithms

Motivated by applications to distributed optimization over networks and large-scale data processing in machine learning, we analyze the deterministic incremental aggregated gradient method for minimizing a finite sum of smooth functions where the sum is strongly convex. This method processes the functions one at a time in a deterministic order and incorporates a memory of previous gradient values...
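The gradient-memory mechanism described in this abstract can be sketched as follows; the function name `iag`, the step size, and the toy quadratic components are illustrative choices, not the paper's exact setup.

```python
import numpy as np

def iag(grads, x0, step, iters):
    """Incremental aggregated gradient (IAG) sketch: keep the most recent
    gradient of each component in a table and step along their running sum,
    refreshing one slot per iteration in a deterministic cyclic order."""
    n = len(grads)
    x = np.asarray(x0, dtype=float)
    memory = [g(x) for g in grads]   # gradient table, one entry per component
    agg = sum(memory)                # running aggregate of stored gradients
    for k in range(iters):
        i = k % n                    # deterministic cyclic component order
        g_new = grads[i](x)
        agg += g_new - memory[i]     # swap the stale gradient out of the sum
        memory[i] = g_new
        x = x - step * agg / n       # step along the (partly stale) average
    return x

# toy strongly convex finite sum: f_i(x) = 0.5 * (x - a_i)^2,
# whose minimizer is the mean of the a_i (here 2.5)
a = [1.0, 2.0, 3.0, 4.0]
grads = [lambda x, ai=ai: x - ai for ai in a]
x_star = iag(grads, np.array([0.0]), step=0.5, iters=200)
```

Each iteration evaluates only one component gradient, yet steps along an approximation of the full average gradient, which is what enables the linear rate analyzed in the paper.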


IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate

The problem of minimizing an objective that can be written as the sum of a set of n smooth and strongly convex functions is challenging because the cost of evaluating the function and its derivatives is proportional to the number of elements in the sum. The Incremental Quasi-Newton (IQN) method proposed here belongs to the family of stochastic and incremental methods that have a cost per iteration...


Linear Convergence of Proximal Incremental Aggregated Gradient Methods under Quadratic Growth Condition

Under the strong convexity assumption, several recent works studied the global linear convergence rate of the proximal incremental aggregated gradient (PIAG) method for minimizing the sum of a large number of smooth component functions and a non-smooth convex function. In this paper, under the quadratic growth condition, a strictly weaker condition than strong convexity, we derive a...
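A minimal sketch of the proximal variant, under the assumption that the non-smooth term is an ℓ1 penalty, so its proximal operator is soft-thresholding; the function names and the toy problem are illustrative, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def piag(grads, lam, x0, step, iters):
    """Proximal incremental aggregated gradient (PIAG) sketch: an aggregated
    gradient step on the smooth average, followed by the proximal step of
    the non-smooth term (here lam * ||x||_1)."""
    n = len(grads)
    x = np.asarray(x0, dtype=float)
    memory = [g(x) for g in grads]       # stored component gradients
    agg = sum(memory)                    # running sum of stored gradients
    for k in range(iters):
        i = k % n                        # deterministic cyclic order
        g_new = grads[i](x)
        agg += g_new - memory[i]         # refresh one gradient in the sum
        memory[i] = g_new
        x = soft_threshold(x - step * agg / n, step * lam)
    return x

# average of f_i(x) = 0.5 * (x - a_i)^2 plus the penalty 0.5 * |x|:
# the minimizer is soft_threshold(mean(a), 0.5) = 2.5 - 0.5 = 2.0
a = [1.0, 2.0, 3.0, 4.0]
grads = [lambda x, ai=ai: x - ai for ai in a]
x_star = piag(grads, lam=0.5, x0=np.array([0.0]), step=0.5, iters=200)
```

The only change relative to the plain aggregated method is the proximal map applied after each step, which handles the non-smooth convex term without ever differentiating it.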


Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate

Recently, there has been growing interest in developing optimization methods for solving large-scale machine learning problems. Most of these problems boil down to minimizing an average of a finite set of smooth and strongly convex functions where the number of functions n is large. The gradient descent (GD) method succeeds in minimizing convex problems at a fast linear rate; however...



Journal

Journal title: SIAM Journal on Optimization

Year: 2019

ISSN: 1052-6234,1095-7189

DOI: 10.1137/17m1147846